
    f-Divergence Inequalities via Functional Domination

    This paper considers the derivation of f-divergence inequalities via the approach of functional domination. Bounds on an f-divergence in terms of one or several other f-divergences are introduced, dealing with pairs of probability measures defined on arbitrary alphabets. In addition, a variety of bounds are shown to hold under boundedness assumptions on the relative information. The journal version, which includes further approaches for the derivation of f-divergence inequalities together with proofs, is available on the arXiv at https://arxiv.org/abs/1508.00335 and has been published in the IEEE Trans. on Information Theory, vol. 62, no. 11, pp. 5973-6006, November 2016.
    Comment: A conference paper, 5 pages. To be presented at the 2016 ICSEE International Conference on the Science of Electrical Engineering, Nov. 16--18, Eilat, Israel.
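    As a concrete instance of bounding one f-divergence by another, the sketch below numerically checks Pinsker's inequality (total variation controlled by the KL divergence) for a pair of discrete distributions. The distributions are arbitrary illustrative choices, and the inequality shown is a classical example of such a bound, not one of the paper's new results.

    ```python
    import math

    def kl_divergence(p, q):
        """KL divergence D(P||Q) in nats for discrete distributions."""
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    def total_variation(p, q):
        """Total variation distance, itself an f-divergence (f(t) = |t - 1| / 2)."""
        return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

    # Two illustrative distributions on a 3-letter alphabet.
    p = [0.5, 0.3, 0.2]
    q = [0.4, 0.4, 0.2]

    # Pinsker's inequality: |P - Q|_TV <= sqrt(D(P||Q) / 2).
    assert total_variation(p, q) <= math.sqrt(kl_divergence(p, q) / 2)
    ```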

    Lossy joint source-channel coding in the finite blocklength regime

    This paper finds new tight finite-blocklength bounds for the best achievable lossy joint source-channel code rate, and demonstrates that joint source-channel code design brings considerable performance advantage over a separate one in the non-asymptotic regime. A joint source-channel code maps a block of $k$ source symbols onto a length-$n$ channel codeword, and the fidelity of reproduction at the receiver end is measured by the probability $\epsilon$ that the distortion exceeds a given threshold $d$. For memoryless sources and channels, it is demonstrated that the parameters of the best joint source-channel code must satisfy $nC - kR(d) \approx \sqrt{nV + k\mathcal{V}(d)}\, Q^{-1}(\epsilon)$, where $C$ and $V$ are the channel capacity and channel dispersion, respectively; $R(d)$ and $\mathcal{V}(d)$ are the source rate-distortion and rate-dispersion functions; and $Q$ is the standard Gaussian complementary cdf. Symbol-by-symbol (uncoded) transmission is known to achieve the Shannon limit when the source and channel satisfy a certain probabilistic matching condition. In this paper we show that even when this condition is not satisfied, symbol-by-symbol transmission is, in some cases, the best known strategy in the non-asymptotic regime.
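    The Gaussian approximation above can be used to estimate the shortest channel blocklength $n$ that supports a given source blocklength $k$. The sketch below solves it by direct search; all numeric parameters (capacity, dispersions, rate-distortion value, excess-distortion probability) are illustrative placeholders, not values from the paper.

    ```python
    import math

    def Q(x):
        """Standard Gaussian complementary cdf."""
        return 0.5 * math.erfc(x / math.sqrt(2))

    def Q_inv(eps, lo=-10.0, hi=10.0):
        """Inverse of Q via bisection (Q is strictly decreasing)."""
        for _ in range(100):
            mid = (lo + hi) / 2
            if Q(mid) > eps:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    def min_blocklength(k, C, V, R_d, V_d, eps):
        """Smallest n with nC - kR(d) >= sqrt(nV + kV(d)) * Q^{-1}(eps),
        per the Gaussian approximation quoted in the abstract."""
        q = Q_inv(eps)
        n = 1
        while n * C - k * R_d < math.sqrt(n * V + k * V_d) * q:
            n += 1
        return n

    # Illustrative placeholders: C = 0.5 bit/use, V = 0.1, R(d) = 0.3 bit/symbol,
    # V(d) = 0.05, excess-distortion probability 1e-3.
    n = min_blocklength(k=1000, C=0.5, V=0.1, R_d=0.3, V_d=0.05, eps=1e-3)
    ```

    Note that the returned $n$ exceeds the asymptotic value $kR(d)/C$; the gap is the finite-blocklength penalty quantified by the dispersion terms.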

    Nonasymptotic noisy lossy source coding

    This paper establishes new general nonasymptotic achievability and converse bounds, and performs their dispersion analysis, for the lossy compression problem in which the compressor observes the source through a noisy channel. While this problem is asymptotically equivalent to a noiseless lossy source coding problem with a modified distortion function, nonasymptotically there is a noticeable gap in how fast their minimum achievable coding rates approach the common rate-distortion function, as evidenced both by the refined asymptotic analysis (dispersion) and by the numerical results. The size of the gap between the dispersions of the noisy problem and the asymptotically equivalent noiseless problem depends on the stochastic variability of the channel through which the compressor observes the source.
    Comment: IEEE Transactions on Information Theory, 201

    A Universal Scheme for Wyner–Ziv Coding of Discrete Sources

    We consider the Wyner–Ziv (WZ) problem of lossy compression in which the decompressor observes a noisy version of the source, whose statistics are unknown. A new family of WZ coding algorithms is proposed and their universal optimality is proven. Compression consists of sliding-window processing followed by Lempel–Ziv (LZ) compression, while the decompressor is based on a modification of the discrete universal denoiser (DUDE) algorithm to take advantage of side information. The new algorithms not only universally attain the fundamental limits, but also suggest a paradigm for practical WZ coding. The effectiveness of our approach is illustrated with experiments on binary images and on English text, using a low-complexity algorithm motivated by our class of universally optimal WZ codes.
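    The decompressor described above builds on the DUDE. As background, a minimal sketch of the two-pass, sliding-window idea for a binary symmetric channel BSC(delta) is given below; it implements only the context-based denoising step (no side information and no LZ stage), using the standard binary flipping threshold 2*delta*(1-delta) / (delta^2 + (1-delta)^2), and is a simplified illustration rather than the algorithm of the paper.

    ```python
    from collections import Counter

    def binary_dude(z, delta, k=2):
        """Two-pass binary DUDE sketch for a BSC(delta).

        Pass 1 counts symbol occurrences per two-sided context (k symbols on
        each side); pass 2 flips z[i] when the empirical counts within its
        context favor the opposite symbol by the BSC flipping threshold."""
        n = len(z)
        counts = Counter()
        for i in range(k, n - k):
            ctx = (tuple(z[i - k:i]), tuple(z[i + 1:i + 1 + k]))
            counts[(ctx, z[i])] += 1
        # Flip z[i] iff m[z] * (d^2 + (1-d)^2) < m[1-z] * 2d(1-d).
        thresh_num = 2 * delta * (1 - delta)
        thresh_den = delta ** 2 + (1 - delta) ** 2
        out = list(z)
        for i in range(k, n - k):
            ctx = (tuple(z[i - k:i]), tuple(z[i + 1:i + 1 + k]))
            m_same = counts[(ctx, z[i])]
            m_flip = counts[(ctx, 1 - z[i])]
            if m_same * thresh_den < m_flip * thresh_num:
                out[i] = 1 - z[i]
        return out
    ```

    On a long run of zeros with a single flipped bit, the lone 1 sits in a context overwhelmingly populated by 0s, so the rule reverses it.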

    Key Capacity with Limited One-Way Communication for Product Sources

    We show that for product sources, rate splitting is optimal for secret key agreement using limited one-way communication at two terminals. This yields an alternative proof of the tensorization property of a strong data processing inequality originally studied by Erkip and Cover and amended recently by Anantharam et al. We derive a 'water-filling' solution of the tradeoff between communication rate and key rate for two arbitrarily correlated vector Gaussian sources, for the case with an eavesdropper, and for stationary Gaussian processes.
    Comment: 5 pages, ISIT 201
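    The term 'water-filling' refers to solutions in which a common 'water level' is poured over per-component levels until a budget is exhausted. As general background only, and not the paper's specific communication-rate/key-rate solution, a generic water-filling allocator can be sketched as:

    ```python
    def water_fill(levels, budget, iters=100):
        """Classical water-filling: find the water level mu such that
        sum(max(mu - a, 0) for a in levels) == budget, by bisection,
        and return the per-component allocations max(mu - a, 0)."""
        lo, hi = min(levels), max(levels) + budget
        for _ in range(iters):
            mu = (lo + hi) / 2
            used = sum(max(mu - a, 0.0) for a in levels)
            if used < budget:
                lo = mu  # water level too low: not all budget used
            else:
                hi = mu  # water level too high: budget exceeded
        return [max(mu - a, 0.0) for a in levels]
    ```

    Components with low levels receive more of the budget, and components whose level sits above the water line receive nothing, which is the characteristic shape of such tradeoff solutions.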

    Fixed-length lossy compression in the finite blocklength regime

    This paper studies the minimum achievable source coding rate as a function of the blocklength $n$ and the probability $\epsilon$ that the distortion exceeds a given level $d$. Tight general achievability and converse bounds are derived that hold at arbitrary fixed blocklength. For stationary memoryless sources with separable distortion, the minimum achievable rate is shown to be closely approximated by $R(d) + \sqrt{\frac{V(d)}{n}}\, Q^{-1}(\epsilon)$, where $R(d)$ is the rate-distortion function, $V(d)$ is the rate dispersion, a characteristic of the source which measures its stochastic variability, and $Q^{-1}(\epsilon)$ is the inverse of the standard Gaussian complementary cdf.
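    To see the approximation in action, the sketch below evaluates R(d) + sqrt(V(d)/n) * Q^{-1}(eps) for a binary memoryless source with bit-error-rate distortion, taking R(d) = h(p) - h(d) and, as an assumption for this illustration, the dispersion equal to the source varentropy p(1-p) * log2((1-p)/p)^2; the parameter values are illustrative, not results from the paper.

    ```python
    import math

    def h(p):
        """Binary entropy function, in bits."""
        return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def Q_inv(eps, lo=-10.0, hi=10.0):
        """Inverse of the standard Gaussian complementary cdf, by bisection."""
        Q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))
        for _ in range(100):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if Q(mid) > eps else (lo, mid)
        return (lo + hi) / 2

    def rate_approx(n, eps, p=0.11, d=0.05):
        """Gaussian approximation R(d) + sqrt(V(d)/n) * Q^{-1}(eps) for a
        binary memoryless source BMS(p) with bit-error-rate distortion d < p.
        Assumes R(d) = h(p) - h(d) and V(d) = p(1-p) log2((1-p)/p)^2."""
        R = h(p) - h(d)
        V = p * (1 - p) * (math.log2((1 - p) / p)) ** 2
        return R + math.sqrt(V / n) * Q_inv(eps)
    ```

    As the blocklength $n$ grows, the dispersion term vanishes and the approximation converges to the rate-distortion function, recovering the asymptotic limit.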